Shadow Banning

Shadow banning, also known as stealth banning, hell banning, ghost banning, and comment ghosting, is the practice of blocking or partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user, regardless of whether the action is taken by an individual or an algorithm. For example, shadow-banned comments posted to a blog or social media site would be visible to the sender, but not to other users accessing the site.
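The core mechanic can be illustrated with a minimal sketch. All names below are hypothetical, not any platform's actual code: the point is only that a shadow-banned author still sees their own posts while every other viewer does not.

```python
# Minimal illustration of the shadow-ban visibility rule:
# a flagged author's posts remain visible to the author alone.

shadow_banned = {"spammer42"}  # accounts flagged by moderators (hypothetical)

def visible_to(viewer: str, author: str) -> bool:
    """Return True if `viewer` should see a post written by `author`."""
    if author not in shadow_banned:
        return True            # normal posts are visible to everyone
    return viewer == author    # banned authors still see their own posts

# The banned user sees their post as if nothing happened...
assert visible_to("spammer42", "spammer42")
# ...but other users do not, and receive no indication a post exists.
assert not visible_to("alice", "spammer42")
```

Because the author's own view is unchanged, nothing on their side signals that a ban is in effect, which is precisely what distinguishes this from an ordinary block.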

The phrase "shadow banning" has a colloquial history and has undergone some evolution of usage. It originally applied to a deceptive sort of account suspension on web forums, where a person would appear to be able to post while actually having all of their content hidden from other users. By 2022, the term had come to apply to alternative measures, particularly visibility measures such as delisting and downranking.

By partly concealing a user's contributions, or making them invisible or less prominent to other members of the service, the hope is that, in the absence of reactions to their comments, the problematic or otherwise out-of-favour user will become bored or frustrated and leave the site, and that spammers and trolls will be discouraged from continuing their unwanted behavior or creating new accounts.


History
In the mid-1980s, BBS software such as Citadel had a "twit bit" for problematic users which, when enabled, would limit the user's access while still allowing them to read public discussions; however, any messages posted by that "twit" would not be visible to other members of the group.

The term "shadow ban" is believed to have originated with moderators on the website Something Awful in 2001, although the feature was only used briefly and sparsely.

Michael Pryor of Fog Creek Software described stealth banning for online forums in 2006, explaining how such a system was in place in the project management software FogBugz "to solve the problem of how do you get the person to go away and leave you alone". As well as preventing problem users from engaging in flame wars, the system also discouraged spammers, who, if they returned to the site, would be under the false impression that their spam was still in place.

(2006). Apress. ISBN 9781430201144.
Jeff Atwood describes it as "one of the oldest moderation tricks in the book", noting that early versions of vBulletin had a global ignore list known as "Tachy goes to Coventry", as in the British expression "to send someone to Coventry", meaning to ignore someone and pretend they do not exist.

A 2012 update to Hacker News introduced a system of "hellbanning" for spamming and abusive behavior.

Early on, Reddit implemented (and continues to practice) shadow banning, purportedly to address spam accounts. In 2015, Reddit added an account suspension feature that was said to have replaced its sitewide shadow bans, though moderators can still shadowban users from their individual subreddits via their AutoModerator configuration as well as manually. One Reddit user was accidentally shadow banned for a year in 2019; after they contacted support, their comments were restored.

A study of tweets written over a one-year period during 2014 and 2015 found that more than a quarter of a million tweets had been censored in Turkey via shadow banning.

(2015). ACM. ISBN 9781450338202.
Twitter was also found, in 2015, to have shadowbanned tweets containing leaked documents in the US.

Craigslist has also been known to "ghost" a user's individual ads, whereby the poster receives a confirmation email and can view the ad in their account, but the ad fails to show up in the appropriate category page.

WeChat was found in 2016 to have banned, without any notification to the user, posts and messages containing various combinations of at least 174 keywords, including "习包子" (a steamed-bun nickname mocking Xi Jinping), "六四天安门" (June 4 Tiananmen), "藏青会" (Tibetan Youth Congress), and "ئاللاھ يولىدا" (in the way of Allah). Messages containing these keywords would appear to have been sent successfully but would not be visible on the receiving end.
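The delivery behavior described above can be sketched in a few lines. This is an illustrative model only, not WeChat's actual implementation; the keyword set and function names are assumptions for the example.

```python
# Hedged sketch of keyword-based message suppression: the sender
# always receives a success indication, but delivery is silently
# dropped when a blocked keyword appears in the message.

BLOCKED_KEYWORDS = {"六四天安门", "藏青会"}  # two of the reported terms

def deliver(message: str) -> dict:
    """Simulate sending a message through a filtering relay."""
    blocked = any(kw in message for kw in BLOCKED_KEYWORDS)
    return {
        "sender_sees": "sent",             # no error is ever shown
        "recipient_receives": not blocked  # delivery silently fails
    }

receipt = deliver("关于六四天安门的文章")
assert receipt["sender_sees"] == "sent"        # looks successful
assert not receipt["recipient_receives"]       # but never arrives
```

The asymmetry is the point: because the sender's client reports success, the filtering is invisible unless sender and recipient compare notes out of band.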

In 2017, the phenomenon was noticed on Instagram, with posts that included specific hashtags not showing up when those hashtags were searched for.

In December 2023, Human Rights Watch echoed the complaints of many Instagram and Facebook users who alleged a drastic reduction in visits to their posts and profiles when the content they posted was about Palestine or the Gaza war, without prior notification from Meta. The Markup's investigation confirmed that posts with war-related imagery or pro-Palestine hashtags were demoted, and hashtags like "#Palestine" or "#AlAqsa" were suppressed from the "Top Posts" section. Meta responded by claiming that this was due to a bug on the platform, which led to criticism about possible bias in the company's moderation.


Drawbacks
Given that shadow bans are mostly executed by automated algorithms without initial human intervention, and that the conditions for imposing them can be quite complex, there is always a percentage of false positives in which a user is shadow banned despite having done nothing wrong.

Because the shadow ban happens without the user being informed, an incorrectly banned user has no chance to ask the platform to revert it, unless they discover the ban by their own means.

Shadow bans are also problematic when a user did break a rule, but unintentionally, in a way that implied no bad intent and did no damage to the platform. For example, suppose a user writes a comment on an online platform and the comment contains a URL to some legitimate, non-spam source. An algorithm patrolling comments, instead of informing the user that URLs are not allowed and blocking the submission, might let the user post the comment with the URL while hiding it from everyone except the original poster.

Had the user been informed about the rule beforehand, they could have written a compliant comment and avoided the shadow ban.
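The two moderation responses discussed above can be put side by side in a short sketch. The helper names and the URL rule are hypothetical, chosen only to mirror the example in the text.

```python
# Contrast between silently shadow-hiding a rule-breaking comment
# and rejecting it with a stated reason the user can act on.

import re

URL_PATTERN = re.compile(r"https?://\S+")  # illustrative "no links" rule

def moderate_silently(comment: str) -> dict:
    """Shadow-hide: accept the comment, but show it only to its author."""
    has_url = bool(URL_PATTERN.search(comment))
    return {"accepted": True, "visible_to_others": not has_url, "reason": None}

def moderate_with_feedback(comment: str) -> dict:
    """Reject with a reason, giving the user a chance to comply."""
    if URL_PATTERN.search(comment):
        return {"accepted": False, "visible_to_others": False,
                "reason": "Links are not allowed in comments."}
    return {"accepted": True, "visible_to_others": True, "reason": None}

comment = "See the report at https://example.org/report"
# Shadow-hiding: the poster believes the comment is public.
assert moderate_silently(comment)["accepted"]
assert not moderate_silently(comment)["visible_to_others"]
# Informed rejection: the poster learns the rule and can rewrite.
assert moderate_with_feedback(comment)["reason"] is not None
```

In the first path the user never learns a rule exists; in the second, a single rejection message converts a silent false-positive-prone filter into an enforceable, correctable rule.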

A wrongful ban is always undesirable regardless of the reason, and it erodes trust in a way that disincentivises the user from further engaging with the platform. This is a net loss when the shadow-banned user was a good contributor.


Legality
Although shadow banning can be an effective moderation tool, it can also have legal implications. If a platform implementing shadow banning does not mention the practice in its terms and conditions, it may effectively be denying a service for no disclosed reason, and hence be in breach of contract.

In the European Union, Article 17 of the Digital Services Act (DSA) directly addresses moderation practices and service restrictions, requiring platforms to disclose the reasons for such restrictions:

Providers of hosting services shall provide a clear and specific statement of reasons to any affected recipients of the service for any of the following restrictions imposed on the ground that the information provided by the recipient of the service is illegal content or incompatible with their terms and conditions.

In 2024, a Dutch user of X (formerly Twitter) sued the platform under the DSA through the European small claims procedure in the Amsterdam District Court for breach of contract, and won the case. The plaintiff claimed that under Article 17 of the DSA, Twitter had failed to proactively notify him and provide a "clear and specific statement of reasons" for the demotion of his account, as the article requires. In its defence, Twitter claimed that its terms and conditions contained clauses allowing it to modify access to functionalities and other obligations at any time, but the court deemed these clauses non-binding under the Unfair Terms in Consumer Contracts Directive and hence dismissed the defence.

Another legal implication is a perceived violation of freedom of speech, depending on how that principle is codified in regulations around the world. In the European Union, the DSA effectively bans shadow banning, because Article 17 requires platforms to always disclose the reasons for a ban or restriction; in practice, however, this is not enforced most of the time. Conversely, the First Amendment to the United States Constitution does not protect users' freedom of speech from shadow banning, because it applies only to interference by the American government, not to third-party private entities such as social networks.


Controversies

Political controversies
"Shadow banning" became popularized in 2018 as a conspiracy theory when Twitter shadow-banned some Republicans. In late July 2018, Vice News found that several supporters of the US Republican Party no longer appeared in the auto-populated drop-down search menu on Twitter, thus limiting their visibility when being searched for; Vice News alleged that this was a case of shadow banning. After the story, some conservatives accused Twitter of enacting a shadow ban on Republican accounts, a claim Twitter denied. However, some accounts that were not overtly political or conservative apparently had the same algorithm applied to them. Numerous news outlets, including The New York Times, Engadget and New York magazine, disputed the Vice News story. In a blog post, Twitter said that the use of the phrase "shadow banning" was inaccurate, as the tweets were still visible by navigating to the home page of the relevant account; in the post, Twitter claimed it does not shadow ban under "the old, narrow, and classical" definition of the term. Later, Twitter appeared to have adjusted its platform to no longer limit the visibility of some accounts. A research study that examined more than 2.5 million Twitter profiles discovered that almost one in 40 had been shadowbanned, by having their replies hidden or their handles hidden in searches.

During the 2020 Twitter account hijackings, hackers successfully managed to obtain access to Twitter's internal moderation tools via both social engineering and bribing a Twitter employee. Through this, images were leaked of an internal account summary page, which in turn revealed user "flags" set by the system that confirmed the existence of shadow bans on Twitter. Accounts were flagged with terms such as "Trends Blacklisted" and "Search Blacklisted" implying that the user was not able to publicly trend, or show up in public search results. After the situation was dealt with, Twitter faced accusations of censorship with claims that they were trying to hide the existence of shadow bans by removing tweets that contained images of the internal staff tools used. However, Twitter claimed they were removed as they revealed sensitive user information.

On December 8, 2022, the second thread of the Twitter Files—a series of Twitter threads based on internal Twitter, Inc. documents shared by owner Elon Musk with independent journalists including Matt Taibbi and Bari Weiss—addressed a practice referred to as "visibility filtering" by previous Twitter management. The functionality included tools allowing accounts to be tagged as "Do not amplify", and placed under "blacklists" that reduce their prominence in search results and trending topics. It was also revealed that certain conservative accounts, such as the far-right Libs of TikTok, had been given a warning stating that decisions regarding them should only be made by Twitter's Site Integrity Policy, Policy Escalation Support (SIP–PES) team, which consists primarily of high-ranking officials. The functions were cited by Musk and other critics as examples of "shadow banning".


Conspiracy theories
A form of conspiracy theory has become popular in which a content creator suggests that their content has been intentionally suppressed by a platform that claims not to engage in shadow banning. Platforms frequently targeted by these accusations include TikTok, Twitter, and Instagram.

Elaine Moore of the Financial Times has written about why users may come to believe they are subject to "shadow bans" even when they are not.


See also
  • Ban (law)
  • Block (Internet)
  • Internet censorship
  • Section 230
  • Terms of service
  • Twitter suspensions
  • Usenet Death Penalty
